In yesterday's chapter we took a first look at how several deployment strategies work, but reading about them is no substitute for trying them out. Over the next two days, we will get familiar with these strategies through plenty of hands-on practice.
Below, we implement the Recreate deployment strategy with native Kubernetes.
Configuration file: my-app-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
  labels:
    app: my-app-svc
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    nodePort: 30000
Configuration file: my-app-v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: Recreate
  template:
    metadata:
      name: my-app
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: lofairy/foo
        ports:
        - name: http
          containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: http
          periodSeconds: 5
Configuration file: my-app-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: Recreate
  template:
    metadata:
      name: my-app
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: lofairy/bar
        ports:
        - name: http
          containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: http
          periodSeconds: 5
The v1 and v2 Deployments are almost identical; the only difference is the image version they use:
lofairy/foo: an app built from this image responds with {"message": "foo"}
lofairy/bar: an app built from this image responds with {"message": "bar"}
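The source for these two images isn't shown in this series, but conceptually each one is just a tiny HTTP server that returns a fixed JSON body. Below is a minimal sketch in Go of what lofairy/foo might contain; this is an illustration under that assumption, not the actual image source:
// main.go — hypothetical sketch of the app inside lofairy/foo
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		// lofairy/bar would encode {"message": "bar"} here instead
		json.NewEncoder(w).Encode(map[string]string{"message": "foo"})
	})
	// Listen on 8080 to match containerPort in the Deployments
	log.Fatal(http.ListenAndServe(":8080", nil))
}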
Next, follow the steps below to verify the Recreate strategy:
Deploy the v1 application along with the Service:
kubectl apply -f my-app-v1.yaml -f my-app-svc.yaml
Test whether my-app deployed successfully:
curl http://0.0.0.0:30000
---
{
  "message": "foo"
}
We can see that the v1 application is running normally.
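We can also check the Service itself to confirm the NodePort mapping (the exact output columns depend on your cluster):
kubectl get svc my-app-svc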
To observe what happens during the deployment, we do the following:
In terminal t1, run the following command to access the Service continuously:
while sleep 0.1; do curl http://0.0.0.0:30000; echo ""; done
Deploy the v2 application:
kubectl apply -f my-app-v2.yaml
Back in terminal t1, check the results:
[...]
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
curl: (52) Empty reply from server
curl: (52) Empty reply from server
Interrupt the command with Ctrl + C, then re-run the continuous access command:
^C
while sleep 0.1; do curl http://0.0.0.0:30000; echo ""; done
{"message":"bar"}
{"message":"bar"}
{"message":"bar"}
{"message":"bar"}
[...]
We can see that the service was interrupted while v2 was being deployed. By the time we can access it again, it has already switched from v1 to v2.
Check the my-app Deployment events:
kubectl get events --field-selector involvedObject.kind=Deployment,involvedObject.name=my-app
The result is as follows:
LAST SEEN TYPE REASON OBJECT MESSAGE
51s Normal ScalingReplicaSet deployment/my-app Scaled down replica set my-app-7f67f6f7fb to 0 from 3
51s Normal ScalingReplicaSet deployment/my-app Scaled up replica set my-app-678d754bb9 to 3 from 0
As we can see, this strategy scales down all replicas of the old version and scales up all replicas of the new version at the same time. The downtime therefore depends on how long the application takes to shut down and start back up.
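If you repeat the update with another terminal open, you can watch the replacement happen at the Pod level; with Recreate, every v1 Pod reaches Terminating before any v2 Pod is created:
kubectl get pods -l app=my-app -w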
Next, we implement the Rolling update deployment strategy with native Kubernetes.
Configuration file: my-app-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
  labels:
    app: my-app-svc
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    nodePort: 30000
Configuration file: my-app-v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      # Setting maxUnavailable to 0 ensures the service is completely unaffected during the rolling update
      maxUnavailable: 0
  template:
    metadata:
      name: my-app
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: lofairy/foo
        ports:
        - name: http
          containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: http
          # A higher initial delay makes the rolling update process easier to observe
          initialDelaySeconds: 15
          periodSeconds: 5
Configuration file: my-app-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      # Setting maxUnavailable to 0 ensures the service is completely unaffected during the rolling update
      maxUnavailable: 0
  template:
    metadata:
      name: my-app
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: lofairy/bar
        ports:
        - name: http
          containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: http
          # A higher initial delay makes the rolling update process easier to observe
          initialDelaySeconds: 15
          periodSeconds: 5
In the configuration files we set spec.strategy.type=RollingUpdate so the Deployment updates the application with the rolling update strategy, and we raise the readiness probe's initialDelaySeconds to make the rolling update process easier to observe.
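As a sanity check on these numbers: with replicas: 3, maxSurge: 1, and maxUnavailable: 0, the Deployment may run at most 4 Pods during the update and must always keep at least 3 of them ready, so the rollout can only proceed one Pod at a time. We can also follow the progress from another terminal; the command below blocks and reports each step until the rollout finishes:
kubectl rollout status deploy my-app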
Next, follow the steps below to verify the RollingUpdate strategy:
Deploy the v1 application along with the Service:
kubectl apply -f my-app-v1.yaml -f my-app-svc.yaml
Test whether my-app deployed successfully:
curl localhost:30000
---
{
  "message": "foo"
}
We can see that the v1 application is running normally.
To observe what happens during the deployment, we do the following:
In terminal t1, run the following command to access the Service continuously:
while sleep 0.1; do curl http://0.0.0.0:30000; echo ""; done
Deploy the v2 application:
kubectl apply -f my-app-v2.yaml
Back in terminal t1, check the results:
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
[...]
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
{"message":"foo"}
{"message":"bar"}
[...]
{"message":"foo"}
{"message":"bar"}
{"message":"bar"}
{"message":"bar"}
{"message":"bar"}
{"message":"bar"}
{"message":"bar"}
{"message":"bar"}
{"message":"bar"}
We can see that while v2 is being deployed, the responses change from all {"message":"foo"} to a mix of {"message":"foo"} and {"message":"bar"}, and finally to all {"message":"bar"}. Moreover, the ratio of v1 to v2 responses shifts smoothly and gradually.
Check the my-app Deployment events with the following command:
kubectl get events --field-selector involvedObject.kind=Deployment,involvedObject.name=my-app
The result is as follows:
LAST SEEN TYPE REASON OBJECT MESSAGE
90s Normal ScalingReplicaSet deployment/my-app Scaled up replica set my-app-5d8f87f974 to 1
70s Normal ScalingReplicaSet deployment/my-app Scaled down replica set my-app-678d754bb9 to 2 from 3
70s Normal ScalingReplicaSet deployment/my-app Scaled up replica set my-app-5d8f87f974 to 2 from 1
49s Normal ScalingReplicaSet deployment/my-app Scaled down replica set my-app-678d754bb9 to 1 from 2
49s Normal ScalingReplicaSet deployment/my-app Scaled up replica set my-app-5d8f87f974 to 3 from 2
29s Normal ScalingReplicaSet deployment/my-app Scaled down replica set my-app-678d754bb9 to 0 from 1
As we can see, this strategy first scales up one v2 replica and only scales down a v1 replica once the new one is ready, repeating until the rollout completes. There is therefore no downtime along the way.
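Because the update is gradual, we can also catch the two ReplicaSets coexisting while it runs, the old one shrinking as the new one grows:
kubectl get rs -l app=my-app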
During the rollout, we can pause the rolling update with the following command:
kubectl rollout pause deploy my-app
Check the Deployment details:
kubectl describe deploy my-app
The result is as follows:
Name: my-app
[...]
Replicas: 3 desired | 2 updated | 4 total | 4 available | 0 unavailable
StrategyType: RollingUpdate
[...]
RollingUpdateStrategy: 0 max unavailable, 1 max surge
[...]
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing Unknown DeploymentPaused
OldReplicaSets: my-app-678d754bb9 (2/2 replicas created)
NewReplicaSet: my-app-5d8f87f974 (2/2 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 2m18s deployment-controller Scaled up replica set my-app-678d754bb9 to 3
Normal ScalingReplicaSet 69s deployment-controller Scaled up replica set my-app-5d8f87f974 to 1
Normal ScalingReplicaSet 49s deployment-controller Scaled down replica set my-app-678d754bb9 to 2 from 3
Normal ScalingReplicaSet 49s deployment-controller Scaled up replica set my-app-5d8f87f974 to 2 from 1
We can see that two v1 replicas and two v2 replicas exist at the same time, and that under Conditions, Progressing is Unknown with reason DeploymentPaused.
Resume the rolling update with the following command:
kubectl rollout resume deploy my-app
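Besides observation, pause and resume are commonly used to batch several spec changes into a single rollout: pause the Deployment, apply the changes, then resume so they roll out together. For example (the memory limit here is only a placeholder value):
kubectl rollout pause deploy my-app
kubectl set image deploy my-app my-app=lofairy/bar
kubectl set resources deploy my-app -c my-app --limits=memory=256Mi
kubectl rollout resume deploy my-app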
If we discover a problem with the new version during the rolling update, we can roll back with the following command:
kubectl rollout undo deploy my-app
Check the my-app Deployment events again:
kubectl get events --field-selector involvedObject.kind=Deployment,involvedObject.name=my-app
The result is as follows:
LAST SEEN TYPE REASON OBJECT MESSAGE
3m26s Normal ScalingReplicaSet deployment/my-app Scaled up replica set my-app-69c99b4757 to 3
2m55s Normal ScalingReplicaSet deployment/my-app Scaled up replica set my-app-5d8f87f974 to 1
2m35s Normal ScalingReplicaSet deployment/my-app Scaled down replica set my-app-69c99b4757 to 2 from 3
2m35s Normal ScalingReplicaSet deployment/my-app Scaled up replica set my-app-5d8f87f974 to 2 from 1
2m15s Normal ScalingReplicaSet deployment/my-app Scaled down replica set my-app-69c99b4757 to 1 from 2
2m15s Normal ScalingReplicaSet deployment/my-app Scaled up replica set my-app-5d8f87f974 to 3 from 2
114s Normal ScalingReplicaSet deployment/my-app Scaled down replica set my-app-69c99b4757 to 0 from 1
79s Normal ScalingReplicaSet deployment/my-app Scaled up replica set my-app-69c99b4757 to 1 from 0
59s Normal ScalingReplicaSet deployment/my-app Scaled down replica set my-app-5d8f87f974 to 2 from 3
18s Normal ScalingReplicaSet deployment/my-app (combined from similar events): Scaled down replica set my-app-5d8f87f974 to 0 from 1
As we can see, when using the Rolling update deployment strategy, a rollback is carried out with the same strategy as well.
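By default, kubectl rollout undo returns to the immediately previous revision. To roll back to a specific revision instead, list the recorded revisions and pass --to-revision (the revision number below is just an example):
kubectl rollout history deploy my-app
kubectl rollout undo deploy my-app --to-revision=1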
Reference: Kubernetes 部署策略详解 - 阳明的博客 (qikqiak.com)